
    Methodological reflections on the evaluation of the implementation and adoption of national electronic health record systems

    Copyright © 2012, International Journal of Integrated Care (IJIC). This work is licensed under a Creative Commons Attribution 3.0 Unported License (http://creativecommons.org/licenses/by/3.0).

    Introduction/purpose of presentation: Far-reaching policy commitments to information technology-centered transformations of healthcare systems have now been made in many countries. There is as yet little empirical evidence to justify such decisions, hence the need for rigorous independent evaluation of current implementation efforts. Such evaluations, however, pose a number of important challenges. This presentation has been designed as part of a panel based on our experience of evaluating the National Health Service's (NHS) implementation of electronic health record (EHR) systems in hospitals throughout England. We discuss the methodological challenges encountered in planning and undertaking an evaluation of a program of this scale, and reflect on why and how we adapted our evaluation approach, both conceptually and methodologically, in response to these challenges.

    Study design/population studied: Critical reflections on a multi-disciplinary, multi-faceted independent evaluation of a national program to implement electronic health record systems in 12 'early wave' NHS hospitals in England.

    Findings: Our initial plan was to employ a mixed-methods longitudinal 'before-during-after' study design. We found this unsustainable, however, in the light of fluxes in policy, contractual issues, and over-optimistic schedules for EHR deployments. More importantly, this research design failed to adequately address the core character of multi-faceted, evolving EHRs as understood by key stakeholders and as worked out in their distinct work settings. Conventional outcomes-centric evaluations may thus not easily scale up to transformational programs and may indeed prove misleading. New assumptions concerning the EHR implementation process need to be developed that recognize the constantly changing milieu of policy, product, projects, and professions inherent to such national implementations. The approaches we subsequently developed replace the positivist view that EHR initiatives are self-evident, self-contained interventions amenable to traditional quantitative evaluation with one that focuses on how they are understood by various stakeholders and made to work in specific contexts. These assumptions recast the role of evaluation towards an approach that explores and interprets the processes of socio-technical change surrounding EHR implementation and adoption, as seen by multiple stakeholders.

    Conclusions and policy implications: There is likely to be an increase in politically driven national programs of healthcare reform based on information and communication technologies. Programs on such a scale are inherently complex, with extended temporalities and extensive, dynamic sets of stakeholders. They are, in short, different, and they pose new evaluation challenges that previously formulated evaluation methods for health information systems cannot easily address. This calls for methodological innovation among research teams and their supporting bodies. We argue that evaluations of such system-wide transformation programs are likely to demand both breadth and depth of experience within a multidisciplinary research team, constant questioning of what is and can be evaluated and how, and a particular way of working that emphasizes continuous dialogue and reflexivity. Making this transition is essential to enable evaluations that can usefully inform policy-making. Health policy experts urgently need to reassess the evaluation strategies they employ as they come to address national policies for system-wide transformation based on new electronic health infrastructures.

    Quantifying simulator discrepancy in discrete-time dynamical simulators

    When making predictions with complex simulators, it can be important to quantify the various sources of uncertainty. Errors in the structural specification of the simulator, for example due to missing processes or incorrect mathematical specification, can be a major source of uncertainty but are often ignored. We introduce a methodology for inferring the discrepancy between the simulator and the system in discrete-time dynamical simulators. We assume a structural form for the discrepancy function, and show how to infer the maximum likelihood parameter estimates using a particle filter embedded within a Monte Carlo expectation maximization (MCEM) algorithm. We illustrate the method on a conceptual rainfall-runoff simulator (logSPM) used to model the Abercrombie catchment in Australia. We assess the simulator and discrepancy model on the basis of their predictive performance using proper scoring rules.
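
    To make the approach concrete, here is a minimal, hypothetical sketch (not the paper's logSPM setup): a bootstrap particle filter that evaluates the log-likelihood of a discrepancy parameter theta for a scalar discrete-time simulator with an assumed additive discrepancy term. In the paper this likelihood would sit inside an MCEM loop; here it is simply evaluated on a small grid of theta values. The simulator f, the discrepancy form delta, and all noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Hypothetical simulator step (a stand-in for, e.g., a rainfall-runoff model).
    return 0.9 * x

def delta(x, theta):
    # Assumed structural form for the simulator discrepancy.
    return theta * np.tanh(x)

def pf_loglik(y, theta, n_particles=500, sig_proc=0.5, sig_obs=0.3):
    """Bootstrap particle filter estimate of log p(y | theta)."""
    x = rng.normal(0.0, 1.0, n_particles)
    loglik = 0.0
    for yt in y:
        # Propagate particles through simulator + discrepancy + process noise.
        x = f(x) + delta(x, theta) + rng.normal(0.0, sig_proc, n_particles)
        # Weight particles by the Gaussian observation density (log scale).
        logw = -0.5 * ((yt - x) / sig_obs) ** 2 - np.log(sig_obs * np.sqrt(2 * np.pi))
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())
        # Multinomial resampling.
        x = rng.choice(x, size=n_particles, p=w / w.sum())
    return loglik

# Synthetic data from a known discrepancy, then a crude grid search over theta;
# in MCEM this evaluation would be replaced by iterated E- and M-steps.
true_theta = 0.4
xt, ys = 0.0, []
for _ in range(100):
    xt = f(xt) + delta(xt, true_theta) + rng.normal(0.0, 0.5)
    ys.append(xt + rng.normal(0.0, 0.3))

for th in [0.0, 0.2, 0.4, 0.6]:
    print(th, pf_loglik(np.array(ys), th))
```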

    Derivations of variational Gaussian process approximation framework

    Recently, within the VISDEM project (EPSRC-funded, EP/C005848/1), a novel variational approximation framework has been developed for inference in partially observed, continuous space-time diffusion processes. In this technical report, all the derivations of the variational framework from the initial work are provided in detail to help the reader better understand the framework and its assumptions.

    Variational mean-field algorithm for efficient inference in large systems of stochastic differential equations

    This work introduces a Gaussian variational mean-field approximation for inference in dynamical systems that can be modeled by ordinary stochastic differential equations. This new approach allows one to express the variational free energy as a functional of the marginal moments of the approximating Gaussian process. Restricting the moment equations to piecewise polynomial functions over time dramatically reduces the complexity of approximate inference for stochastic differential equation models and makes it comparable to that of discrete-time hidden Markov models. The algorithm is demonstrated on state and parameter estimation for nonlinear problems with up to 1000-dimensional state vectors, and the results are compared empirically with various well-known inference methodologies.
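
    The central object here is the variational free energy expressed through the marginal moments m(t) and s(t) of the approximating Gaussian process. As a minimal illustration (assuming a double-well drift and stand-in piecewise-linear moment functions, none of which are taken from the paper), the sketch below evaluates one term of such a free energy: the time integral of the Gaussian expectation of the squared mismatch between the true drift and a linear variational drift, computed with Gauss-Hermite quadrature.

```python
import numpy as np

def f(x):
    return 4.0 * x * (1.0 - x ** 2)   # assumed double-well drift

def g(x, a, b):
    return -a * x + b                 # linear variational drift

# Stand-in piecewise-linear marginal moments on a coarse time grid.
t = np.linspace(0.0, 1.0, 21)
m = np.sin(2 * np.pi * t)             # mean path m(t)
s = 0.1 + 0.05 * t                    # variance path s(t)
a, b = 3.0, 0.0                       # stand-in linear-drift coefficients

# Gauss-Hermite nodes/weights for expectations under N(m, s).
z, w = np.polynomial.hermite_e.hermegauss(20)
w = w / np.sqrt(2.0 * np.pi)

def drift_mismatch(mt, st):
    x = mt + np.sqrt(st) * z
    return np.sum(w * (f(x) - g(x, a, b)) ** 2)

# Time integral of E_q[(f(x) - g(x))^2] via the trapezoidal rule.
integrand = np.array([drift_mismatch(mt, st) for mt, st in zip(m, s)])
energy_term = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))
print("drift-mismatch term of the free energy:", energy_term)
```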

    Variational Markov chain Monte Carlo for Bayesian smoothing of non-linear diffusions

    In this paper we develop a set of novel Markov chain Monte Carlo algorithms for Bayesian smoothing of partially observed non-linear diffusion processes. The sampling algorithms developed herein use a deterministic approximation to the posterior distribution over paths as the proposal distribution for a mixture of an independence sampler and a random walk sampler. The approximating distribution is sampled by simulating an optimized time-dependent linear diffusion process derived from the recently developed variational Gaussian process approximation method. Flexible blocking strategies are introduced to further improve the mixing, and thus the efficiency, of the sampling algorithms. The algorithms are tested on two diffusion processes: one with a double-well potential drift and another with a SINE drift. The new algorithms' accuracy and efficiency are compared with state-of-the-art hybrid Monte Carlo based path sampling. It is shown that in practical, finite-sample applications the algorithm is accurate except in the presence of large observation errors and low observation densities, which lead to a multi-modal structure in the posterior distribution over paths. More importantly, the variational-approximation-assisted sampling algorithm outperforms hybrid Monte Carlo in terms of computational efficiency, except when the diffusion process is densely observed with small errors, in which case both algorithms are equally efficient.
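
    The proposal construction can be illustrated on a toy one-dimensional target (the paper works on whole diffusion paths): a Metropolis-Hastings sampler that randomly mixes an independence move drawn from a fixed Gaussian approximation with a symmetric random-walk move, each move accepted with its own Metropolis-Hastings correction. The bimodal target and the Gaussian "approximation" below are stand-ins, not the paper's variational Gaussian process approximation.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # Bimodal stand-in for a multi-modal smoothing posterior (unnormalized).
    return np.logaddexp(-0.5 * (x - 2) ** 2, -0.5 * (x + 2) ** 2)

mu_q, sig_q = 0.0, 3.0        # stand-in deterministic approximation q
def log_q(x):
    return -0.5 * ((x - mu_q) / sig_q) ** 2 - np.log(sig_q)

x, chain, p_ind, step = 0.0, [], 0.5, 0.5
for _ in range(20000):
    if rng.random() < p_ind:
        # Independence move from q: acceptance needs the proposal density ratio.
        xp = rng.normal(mu_q, sig_q)
        log_acc = (log_target(xp) - log_target(x)) + (log_q(x) - log_q(xp))
    else:
        # Symmetric random-walk move: proposal densities cancel.
        xp = x + step * rng.normal()
        log_acc = log_target(xp) - log_target(x)
    if np.log(rng.random()) < log_acc:
        x = xp
    chain.append(x)

print("posterior mean ~", np.mean(chain))  # ~0 by symmetry of the target
```

    Each component kernel satisfies detailed balance on its own, so mixing them with a state-independent probability leaves the target invariant; the independence component lets the chain jump between modes that a pure random walk would cross only rarely.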

    Modelling frontal discontinuities in wind fields

    A Bayesian procedure for the retrieval of wind vectors over the ocean using satellite-borne scatterometers requires realistic prior near-surface wind field models over the oceans. We have implemented carefully chosen vector Gaussian process models; however, in some cases these models are too smooth to reproduce real atmospheric features such as fronts. At the scale of the scatterometer observations, fronts appear as discontinuities in wind direction. Due to the nature of the retrieval problem, a simple discontinuity model is not feasible, and hence we have developed a constrained discontinuity vector Gaussian process model that ensures realistic fronts. We describe the generative model and show how to compute the data likelihood given the model. We show the results of inference using the model with Markov chain Monte Carlo methods on both synthetic and real data.
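
    As a rough, one-dimensional illustration of the modelling idea (not the paper's constrained vector Gaussian process), the sketch below generates a wind-direction field with a frontal discontinuity by sampling independent Gaussian processes on either side of an assumed front location; the kernel, length-scale, and mean directions are arbitrary choices for display.

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf(x1, x2, ell=0.2, var=1.0):
    # Squared-exponential covariance (an arbitrary choice here).
    d = x1[:, None] - x2[None, :]
    return var * np.exp(-0.5 * (d / ell) ** 2)

x = np.linspace(0.0, 1.0, 200)
front = 0.55                          # assumed front position
left, right = x < front, x >= front

def gp_sample(xs):
    # Draw one sample path from a zero-mean GP at locations xs.
    K = rbf(xs, xs) + 1e-8 * np.eye(len(xs))
    return np.linalg.cholesky(K) @ rng.normal(size=len(xs))

# Smooth direction fields on each side, with a jump across the front.
direction = np.empty_like(x)
direction[left] = 180.0 + 15.0 * gp_sample(x[left])    # degrees
direction[right] = 250.0 + 15.0 * gp_sample(x[right])

print("jump at the front (deg):", direction[right][0] - direction[left][-1])
```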

    A new variational radial basis function approximation for inference in multivariate diffusions

    In this paper we present a radial basis function based extension to a recently proposed variational algorithm for approximate inference in diffusion processes. Inference for the state and, in particular, the (hyper-)parameters of diffusion processes is a challenging and crucial task. We show that the new radial basis function approximation based algorithm converges to the original algorithm and has beneficial characteristics when estimating (hyper-)parameters. We validate our new approach on a nonlinear double-well potential dynamical system.
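
    The parameterisation idea can be sketched in a few lines: represent a time-varying variational quantity, such as a drift coefficient A(t), as a radial basis function expansion and fit the weights by least squares. The basis count, widths, and target curve below are illustrative assumptions, not values from the paper.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 200)
target = np.sin(3 * np.pi * t) * np.exp(-t)   # stand-in for A(t)

# Gaussian RBF design matrix: A(t) ~= sum_i w_i * phi_i(t).
centres = np.linspace(0.0, 1.0, 15)
width = 0.08
Phi = np.exp(-0.5 * ((t[:, None] - centres[None, :]) / width) ** 2)

# Fit the expansion weights by least squares.
w, *_ = np.linalg.lstsq(Phi, target, rcond=None)
approx = Phi @ w
print("max abs error:", np.max(np.abs(approx - target)))
```

    Refining the basis (more centres, narrower widths) drives the approximation toward the unrestricted function, which is the sense in which a basis-function-restricted algorithm can converge to the original one.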

    Learning to live with Dale's principle: ANNs with separate excitatory and inhibitory units

    The units in artificial neural networks (ANNs) can be thought of as abstractions of biological neurons, and ANNs are increasingly used in neuroscience research. However, there are many important differences between ANN units and real neurons. One of the most notable is the absence of Dale's principle, which ensures that biological neurons are either exclusively excitatory or inhibitory. Dale's principle is typically left out of ANNs because its inclusion impairs learning. This is problematic, because one of the great advantages of ANNs for neuroscience research is their ability to learn complicated, realistic tasks. Here, by taking inspiration from feedforward inhibitory interneurons in the brain, we show that we can develop ANNs with separate populations of excitatory and inhibitory units that learn just as well as standard ANNs. We call these networks Dale's ANNs (DANNs). We present two insights that enable DANNs to learn well: (1) DANNs are related to normalization schemes and can be initialized such that the inhibition centres and standardizes the excitatory activity, and (2) updates to inhibitory neuron parameters should be scaled using corrections based on the Fisher information matrix. These results demonstrate how ANNs that respect Dale's principle can be built without sacrificing learning performance, which is important for future work using ANNs as models of the brain. The results may also have interesting implications for how inhibitory plasticity in the real brain operates.
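
    A minimal sketch of the sign constraint at the heart of such a layer (the paper's initialization scheme and Fisher-based update corrections are not reproduced here): each input unit is tagged excitatory or inhibitory, and the effective weight matrix is forced to be non-negative in excitatory columns and non-positive in inhibitory columns. All sizes and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

n_in, n_out = 8, 4
is_excit = np.array([True] * 6 + [False] * 2)   # 6 E units, 2 I units

W_free = rng.normal(size=(n_out, n_in))          # unconstrained parameters

def dale_weights(W_free):
    # Enforce Dale's principle: fixed sign per presynaptic (input) unit.
    sign = np.where(is_excit, 1.0, -1.0)
    return np.abs(W_free) * sign[None, :]

x = rng.random(n_in)                             # non-negative firing rates
y = dale_weights(W_free) @ x
print(y)
```

    Because the sign pattern is applied through a reparameterization of unconstrained weights, gradient-based training can proceed as usual while every unit remains purely excitatory or purely inhibitory.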